Calorimeter shower simulations are often the bottleneck in simulation time for particle physics detectors. Much current effort goes into optimizing generative architectures for specific detector geometries, and these models generalize poorly to other geometries. We develop a geometry-aware autoregressive model on a range of calorimeter geometries, such that the model learns to adapt its energy deposition to the size and position of the cells. This is a key proof-of-concept step towards building a model that can generalize to new, unseen calorimeter geometries with little to no additional training. Such a model can replace the hundreds of generative models used for calorimeter simulation in a Large Hadron Collider experiment. For the study of future detectors, such a model will dramatically reduce the large upfront investment usually needed to generate simulations.
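As a rough illustration of the conditioning idea (not the authors' architecture), the sketch below feeds each cell's position and size alongside the previously generated deposits into a small autoregressive network, so the same weights can be reused across different cell layouts; all module names and dimensions are invented for the example.

```python
# Minimal sketch of a geometry-aware autoregressive shower model: each cell's
# energy deposit is predicted from deposits already generated plus that cell's
# position and size, so the network can be run over different calorimeter layouts.
import torch
import torch.nn as nn

class GeometryAwareAR(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        # per-step input: previous deposit (1) + cell centre xyz (3) + cell size xyz (3)
        self.rnn = nn.GRU(input_size=7, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)  # mean and log-variance of the next deposit

    def forward(self, prev_deposits, cell_geometry):
        # prev_deposits: (batch, n_cells, 1); cell_geometry: (batch, n_cells, 6)
        x = torch.cat([prev_deposits, cell_geometry], dim=-1)
        h, _ = self.rnn(x)
        mu, log_var = self.head(h).chunk(2, dim=-1)
        return mu, log_var

# usage on a toy 10-cell layout: predict each cell's deposit from earlier cells + geometry
deposits = torch.rand(4, 10, 1)
geometry = torch.rand(4, 10, 6)
prev = torch.cat([torch.zeros(4, 1, 1), deposits[:, :-1]], dim=1)  # shift by one cell
mu, log_var = GeometryAwareAR()(prev, geometry)
print(mu.shape)  # torch.Size([4, 10, 1])
```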
Robust reinforcement learning (RL) considers the problem of learning policies that perform well in the worst case over a set of possible environment parameter values. In real-world settings, choosing the set of possible values for robust RL can be a difficult task. When that set is specified too narrowly, the agent will be vulnerable to reasonable parameter values left out of it. When it is specified too broadly, the agent will be too cautious. In this paper, we propose Feasible Adversarially Robust RL (FARR), a method for automatically determining the set of environment parameter values over which to be robust. FARR implicitly defines the feasible parameter values as those on which an agent could achieve a benchmark reward given sufficient training resources. By formulating the problem as a two-player zero-sum game, FARR jointly learns an adversarial distribution over parameter values with feasible support and a policy that is robust over this feasible parameter set. Using the PSRO algorithm to find an approximate Nash equilibrium in this FARR game, we show that an agent trained with FARR is more robust to feasible adversarial parameter selection than agents trained with existing minimax, domain-randomization, and regret objectives in parameterized control environments.
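A schematic sketch of the feasibility notion described above, under the assumption that "feasible" simply means a dedicated agent can clear a return threshold; the functions and toy environments below are hypothetical stand-ins, not the FARR implementation.

```python
# Feasibility idea: a parameter value counts as feasible if a dedicated agent can
# reach a return threshold on it; the robust objective is then the worst case over
# the feasible set only, rather than over every possible parameter value.
import numpy as np

def feasible_set(param_values, best_attainable_return, return_threshold):
    """Keep only parameter values on which a dedicated agent clears the threshold."""
    return [p for p in param_values if best_attainable_return(p) >= return_threshold]

def worst_case_return(policy_return, params):
    """Robust objective: minimum return of the shared policy over the given params."""
    return min(policy_return(p) for p in params)

# toy example: environments parameterised by a scalar "difficulty"
params = np.linspace(0.0, 2.0, 9)
best_attainable = lambda p: 1.0 - 0.6 * p   # stand-in for training a per-parameter agent
shared_policy = lambda p: 0.8 - 0.5 * p     # stand-in for the robust policy's return

feasible = feasible_set(params, best_attainable, return_threshold=0.2)
print(worst_case_return(shared_policy, feasible))  # min over feasible params only
print(worst_case_return(shared_policy, params))    # naive minimax over all params is lower
```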
In competitive two-agent environments, deep reinforcement learning (RL) methods based on the Double Oracle (DO) algorithm, such as Policy Space Response Oracles (PSRO) and Anytime PSRO (APSRO), iteratively add RL best-response policies to a population. Eventually, the optimal mixture of these population policies approximates a Nash equilibrium. However, these methods may need to add all deterministic policies before converging. In this work, we introduce Self-Play PSRO (SP-PSRO), a method that adds an approximately optimal stochastic policy to the population in each iteration. Rather than adding only deterministic best responses to the opponent's least-exploitable population mixture, SP-PSRO also learns an approximately optimal stochastic policy and adds it to the population. As a result, SP-PSRO empirically tends to converge much faster than APSRO and, in many games, converges in just a few iterations.
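To make the population-growing loop concrete, here is a toy double-oracle iteration on a random matrix game; it only adds deterministic best responses, whereas SP-PSRO would additionally learn and add an approximately optimal stochastic policy each iteration. The payoff matrix and iteration count are arbitrary.

```python
# Toy double-oracle / PSRO loop on a small zero-sum matrix game.
import numpy as np
from scipy.optimize import linprog

def nash_mixture(payoff):
    """Row player's maximin mixture for a zero-sum matrix game, via linear programming."""
    n_rows, n_cols = payoff.shape
    # variables: x_1..x_n (row mixture), v (game value); minimise -v
    c = np.concatenate([np.zeros(n_rows), [-1.0]])
    A_ub = np.hstack([-payoff.T, np.ones((n_cols, 1))])  # v - x^T A[:, j] <= 0 for each column j
    b_ub = np.zeros(n_cols)
    A_eq = np.array([np.concatenate([np.ones(n_rows), [0.0]])])  # mixture sums to 1
    b_eq = np.array([1.0])
    bounds = [(0, 1)] * n_rows + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:n_rows]

rng = np.random.default_rng(0)
full_game = rng.normal(size=(8, 8))  # stand-in for the underlying game
rows, cols = [0], [0]                # initial populations (pure strategies)

for _ in range(6):
    sub = full_game[np.ix_(rows, cols)]
    row_mix = nash_mixture(sub)       # row player's equilibrium mix of the restricted game
    col_mix = nash_mixture(-sub.T)    # column player's equilibrium mix (column player minimises)
    # deterministic best responses against the opponents' restricted-game mixtures
    new_row = int(np.argmax(full_game[:, cols] @ col_mix))
    new_col = int(np.argmin(row_mix @ full_game[rows, :]))
    rows.append(new_row)
    cols.append(new_col)

print("row population:", rows, "column population:", cols)
```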
The ability to predict chemical reactions and their properties is fundamental to science and technology. To achieve such skill, it is important to develop good representations of chemical reactions, or good deep learning architectures that can learn such representations automatically from data. There is currently no universal and widely adopted method for robustly representing chemical reactions. Most existing methods suffer from one or more drawbacks, such as: (1) lack of generality; (2) lack of robustness; (3) lack of interpretability; or (4) the need for excessive manual preprocessing. Here, we leverage graph-based representations of molecular structures to develop and test a hypergraph attention neural network approach that solves the reaction-representation and property-prediction problems at once, alleviating the aforementioned drawbacks. We evaluate this hypergraph representation in three experiments using three independent datasets of chemical reactions. In all experiments, the hypergraph-based approach matches or outperforms other representations and their corresponding models of chemical reactions, while producing interpretable multi-level representations.
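A minimal sketch of what one hypergraph attention round might look like, with atoms as nodes and reaction-level groupings as hyperedges; the layer, its names, and the toy incidence matrix are illustrative assumptions rather than the architecture from the abstract.

```python
# One node -> hyperedge -> node attention round over a 0/1 incidence matrix.
import torch
import torch.nn as nn
import torch.nn.functional as F

def masked_softmax(scores, mask):
    """Softmax over the last dim, restricted to positions where mask == 1."""
    return torch.softmax(scores + (mask - 1.0) * 1e9, dim=-1)

class HypergraphAttentionLayer(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.node_score = nn.Linear(dim, 1)
        self.edge_score = nn.Linear(dim, 1)
        self.out = nn.Linear(dim, dim)

    def forward(self, x, incidence):
        # x: (n_nodes, dim); incidence: (n_nodes, n_edges), 1 where a node is in a hyperedge
        node_scores = self.node_score(x).squeeze(-1).unsqueeze(0)           # (1, n_nodes)
        attn_ne = masked_softmax(node_scores, incidence.T)                  # (n_edges, n_nodes)
        edge_feats = attn_ne @ x                                            # hyperedge embeddings
        edge_scores = self.edge_score(edge_feats).squeeze(-1).unsqueeze(0)  # (1, n_edges)
        attn_en = masked_softmax(edge_scores, incidence)                    # (n_nodes, n_edges)
        return F.relu(self.out(attn_en @ edge_feats))                       # updated node embeddings

# toy "reaction": 5 atoms, 2 hyperedges (e.g. a reactant group and a product group)
x = torch.randn(5, 16)
incidence = torch.tensor([[1., 0.], [1., 0.], [1., 1.], [0., 1.], [0., 1.]])
print(HypergraphAttentionLayer(16)(x, incidence).shape)  # torch.Size([5, 16])
```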
The creation of unstable heavy particles at the Large Hadron Collider is the most direct way to address some of the deepest open questions in physics. Collisions typically produce variable numbers of observed particles, with inherent ambiguities that complicate the assignment of the observed particles to the decay products of the heavy particles. Current strategies in the physics community for tackling these challenges ignore the physical symmetries of the decay products, consider all possible assignment permutations, and do not scale to complex configurations. Attention-based deep learning methods for sequence modelling have achieved state-of-the-art performance in natural language processing, but they lack built-in mechanisms to handle the unique symmetries found in physical set-assignment problems. We introduce a novel method for constructing symmetry-preserving attention networks that reflect the natural invariances of the problem, in order to find assignments efficiently without evaluating all permutations. This general approach is applicable to arbitrarily complex configurations and significantly outperforms current methods, improving reconstruction efficiency by 19%-35% on typical benchmark problems while reducing inference time by two to five orders of magnitude on the most complex events, making many important and previously intractable cases tractable. A complete code repository containing a general-purpose library, the specific configurations used, and the full dataset release is available at https://github.com/alexanders101/spanet
Top quarks, produced in large numbers at the Large Hadron Collider, have a complex detector signature that requires special reconstruction techniques. The most common decay mode, the "all-jets" channel, results in a six-jet final state that is particularly difficult to reconstruct in $pp$ collisions due to the large number of possible permutations. We present a novel approach to this problem based on neural networks with a generalized attention mechanism, which we call Symmetry Preserving Attention Networks (SPA-NET). We train one such network to explicitly identify the decay products of each top quark, without combinatorial explosion, as an example of the power of this technique. This approach significantly outperforms existing state-of-the-art methods, correctly assigning all jets in 93.0% of 6-jet, 87.8% of 7-jet, and 82.6% of $\geq 8$-jet events.
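A toy sketch of the permutation-symmetry constraint behind symmetry-preserving attention, as described in the two abstracts above: when two decay products are interchangeable, the pairwise jet-assignment score is forced to be invariant under swapping them by symmetrising the scoring tensor. The class name and dimensions are invented for illustration.

```python
# Symmetric pairwise scoring: score[i, j] == score[j, i] by construction, so the
# assignment of two interchangeable decay products to jets i and j is swap-invariant.
import torch
import torch.nn as nn

class SymmetricPairScore(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(dim, dim) / dim ** 0.5)

    def forward(self, jets):
        # jets: (n_jets, dim) embeddings; returns an (n_jets, n_jets) score table
        w_sym = 0.5 * (self.weight + self.weight.T)  # enforce a symmetric bilinear form
        return jets @ w_sym @ jets.T

scores = SymmetricPairScore(dim=8)(torch.randn(6, 8))
assert torch.allclose(scores, scores.T)  # invariant under swapping the two jets
```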
Recent advances in deep learning have enabled us to address the curse of dimensionality (COD) by solving problems in higher dimensions. A subset of these approaches to addressing the COD has enabled us to solve high-dimensional PDEs. This has opened the door to solving a variety of real-world problems ranging from mathematical finance to stochastic control for industrial applications. Although feasible, these deep learning methods are still constrained by training time and memory. Tackling these shortcomings, we demonstrate that Tensor Neural Networks (TNN) can provide significant parameter savings while attaining the same accuracy as a classical Dense Neural Network (DNN). In addition, we also show how TNN can be trained faster than DNN for the same accuracy. Besides TNN, we also introduce Tensor Network Initializer (TNN Init), a weight initialization scheme that leads to faster convergence with smaller variance for an equivalent parameter count compared to a DNN. We benchmark TNN and TNN Init by applying them to solve the parabolic PDE associated with the Heston model, which is widely used in financial pricing theory.
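As a hedged illustration of why tensorised layers save parameters (this is a generic low-rank factorisation, not the specific tensor-network layers behind TNN), compare the parameter counts of a dense layer and a factorised one:

```python
# A rank-r factorisation replaces in_dim * out_dim weights with r * (in_dim + out_dim).
import torch
import torch.nn as nn

class FactorizedLinear(nn.Module):
    def __init__(self, in_dim, out_dim, rank):
        super().__init__()
        self.u = nn.Parameter(torch.randn(in_dim, rank) / in_dim ** 0.5)
        self.v = nn.Parameter(torch.randn(rank, out_dim) / rank ** 0.5)
        self.bias = nn.Parameter(torch.zeros(out_dim))

    def forward(self, x):
        return x @ self.u @ self.v + self.bias

dense_params = 512 * 512 + 512
factored = FactorizedLinear(512, 512, rank=16)
print(dense_params, sum(p.numel() for p in factored.parameters()))  # ~262k vs ~17k
```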
Training a very deep neural network is a challenging task, as the deeper a neural network is, the more non-linear it is. We compare the performance of various preconditioned Langevin algorithms with their non-Langevin counterparts for training neural networks of increasing depth. For shallow neural networks, Langevin algorithms do not lead to any improvement; however, the deeper the network is, the greater the gains provided by Langevin algorithms. Adding noise to gradient descent helps escape local traps, which are more frequent in very deep neural networks. Following this heuristic, we introduce a new Langevin algorithm called Layer Langevin, which consists in adding Langevin noise only to the weights of the deepest layers. We then prove the benefits of Langevin and Layer Langevin algorithms for the training of popular deep residual architectures for image classification.
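A minimal sketch of the Layer Langevin idea, assuming plain SGD as the base optimiser and an arbitrary noise scale: Gaussian noise is injected only into the parameters of the chosen (deepest) layers, while all other parameters receive ordinary gradient steps.

```python
# Layer Langevin sketch: noise is added only to the deepest layers' parameters.
import math
import torch
import torch.nn as nn

def layer_langevin_step(model, lr, sigma, noisy_layers):
    """One manual SGD update; `noisy_layers` is the set of module names that get Langevin noise."""
    with torch.no_grad():
        for name, param in model.named_parameters():
            if param.grad is None:
                continue
            param -= lr * param.grad
            if name.split('.')[0] in noisy_layers:
                # noise scale sigma * sqrt(lr) is schematic; other scalings are possible
                param += sigma * math.sqrt(lr) * torch.randn_like(param)

model = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 32), nn.ReLU(), nn.Linear(32, 1))
x, y = torch.randn(16, 4), torch.randn(16, 1)
loss = nn.functional.mse_loss(model(x), y)
loss.backward()
layer_langevin_step(model, lr=1e-2, sigma=1e-3, noisy_layers={'2', '4'})  # two deepest layers
```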
Machine learning (ML) models can leak information about users, and differential privacy (DP) provides a rigorous way to bound that leakage under a given budget. This DP budget can be regarded as a new type of compute resource in workloads of multiple ML models training on user data. Once it is used, the DP budget is forever consumed. Therefore, it is crucial to allocate it most efficiently to train as many models as possible. This paper presents a scheduler for privacy budgets that optimizes for efficiency. We formulate privacy scheduling as a new type of multidimensional knapsack problem, called privacy knapsack, which maximizes DP budget efficiency. We show that privacy knapsack is NP-hard, hence practical algorithms are necessarily approximate. We develop an approximation algorithm for privacy knapsack, DPK, and evaluate it on microbenchmarks and on a new, synthetic private-ML workload we developed from the Alibaba ML cluster trace. We show that DPK: (1) often approaches the efficiency-optimal schedule, (2) consistently schedules more tasks compared to a state-of-the-art privacy scheduling algorithm that focused on fairness (1.3-1.7x in Alibaba, 1.0-2.6x in microbenchmarks), but (3) sacrifices some level of fairness for efficiency. Therefore, using DPK, DP ML operators should be able to train more models on the same amount of user data while offering the same privacy guarantee to their users.
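To make the scheduling setting concrete, here is a toy greedy scheduler that admits tasks in order of utility per unit of DP budget consumed; it illustrates the privacy-knapsack trade-off but is not the DPK algorithm, and all task names and numbers are invented.

```python
# Toy greedy scheduling over per-data-block DP budgets.
def greedy_schedule(tasks, block_budget):
    """tasks: list of (name, utility, {block: epsilon_demand}); block_budget: {block: epsilon_left}."""
    remaining = dict(block_budget)
    order = sorted(tasks, key=lambda t: t[1] / sum(t[2].values()), reverse=True)
    admitted = []
    for name, _, demand in order:
        if all(remaining[b] >= eps for b, eps in demand.items()):
            for b, eps in demand.items():
                remaining[b] -= eps  # DP budget is consumed forever once spent
            admitted.append(name)
    return admitted

tasks = [
    ("model_a", 3.0, {"block1": 0.5, "block2": 0.5}),
    ("model_b", 1.0, {"block1": 0.2}),
    ("model_c", 2.0, {"block2": 0.9}),
]
print(greedy_schedule(tasks, {"block1": 1.0, "block2": 1.0}))  # ['model_b', 'model_a']
```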
Imperfect information games (IIG) are games in which each player only partially observes the current game state. We study how to learn $\epsilon$-optimal strategies in a zero-sum IIG through self-play with trajectory feedback. We give a problem-independent lower bound $\mathcal{O}(H(A_{\mathcal{X}}+B_{\mathcal{Y}})/\epsilon^2)$ on the required number of realizations to learn these strategies with high probability, where $H$ is the length of the game, and $A_{\mathcal{X}}$ and $B_{\mathcal{Y}}$ are the total number of actions for the two players. We also propose two Follow the Regularized Leader (FTRL) algorithms for this setting: Balanced-FTRL, which matches this lower bound but requires knowledge of the information set structure beforehand to define the regularization; and Adaptive-FTRL, which needs $\mathcal{O}(H^2(A_{\mathcal{X}}+B_{\mathcal{Y}})/\epsilon^2)$ plays without this requirement, by progressively adapting the regularization to the observations.
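A minimal sketch of a plain Follow the Regularized Leader update with an entropy regularizer on a single probability simplex; the Balanced and Adaptive variants from the abstract additionally tailor the regularization to the information-set structure, which this toy ignores.

```python
# FTRL with entropy regularization on a simplex reduces to a softmax of the
# (negated) cumulative losses, scaled by a learning rate eta.
import numpy as np

def ftrl_entropy(loss_sequence, eta):
    n_actions = loss_sequence.shape[1]
    cumulative = np.zeros(n_actions)
    strategies = []
    for loss in loss_sequence:
        logits = -eta * cumulative
        strategy = np.exp(logits - logits.max())  # stabilised softmax
        strategies.append(strategy / strategy.sum())
        cumulative += loss
    return np.array(strategies)

losses = np.random.default_rng(0).uniform(size=(100, 3))
print(ftrl_entropy(losses, eta=0.5)[-1])  # strategy after 100 rounds
```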